6 research outputs found

    Synergistic policy and virtual machine consolidation in cloud data centers

    Get PDF
    In modern Cloud Data Centers (DCs), the correct implementation of network policies is crucial to providing secure, efficient, and high-performance services for tenants. It is reported that inefficient management of network policies accounts for 78% of DC downtime, a problem compounded by dynamically changing network characteristics and by the effects of dynamic Virtual Machine (VM) consolidation. While there has been significant research into policy and VM management, the two have so far been treated as disjoint research problems. In this paper, we explore simultaneous, dynamic VM and policy consolidation, and formulate the Policy-VM Consolidation (PVC) problem, which is shown to be NP-hard. We then propose Sync, an efficient and synergistic scheme to jointly consolidate network policies and virtual machines. Extensive evaluation results and a testbed implementation of our controller show that policy and VM migration under Sync reduces flow end-to-end delay by nearly 40% and network-wide communication cost by 50% within a few seconds, while adhering strictly to the requirements of network policies.
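
    To make the joint objective concrete, the sketch below (illustrative Python with assumed names such as placement, flows, policy_chain and dist, not the paper's notation or implementation) computes a network-wide communication cost in which each flow is routed through the ordered middlebox chain its policy requires; this is the kind of placement cost that joint policy and VM consolidation reasons about:

        # Illustrative sketch only: flows between VMs must traverse an ordered
        # chain of middleboxes, so a flow's path cost is the sum of the
        # waypoint-to-waypoint segments rather than the direct host-to-host
        # distance. All names and data structures here are assumptions.

        def policy_path_cost(src_host, dst_host, middlebox_hosts, dist):
            """Cost of a flow that must pass through its middleboxes, in order.

            dist[(a, b)] is an assumed host-to-host distance (e.g. hop count),
            with dist[(a, a)] == 0 for co-located endpoints.
            """
            waypoints = [src_host, *middlebox_hosts, dst_host]
            return sum(dist[(a, b)] for a, b in zip(waypoints, waypoints[1:]))

        def placement_cost(flows, placement, policy_chain, dist):
            """Network-wide communication cost of a candidate VM/policy placement.

            flows maps (src_vm, dst_vm) -> traffic rate; policy_chain maps the
            same pair to the hosts currently running its required middleboxes.
            """
            return sum(rate * policy_path_cost(placement[s], placement[d],
                                               policy_chain.get((s, d), []), dist)
                       for (s, d), rate in flows.items())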

    Scalable traffic-aware virtual machine management for cloud data centers

    Get PDF
    Virtual Machine (VM) management is a powerful mechanism for providing elastic services over Cloud Data Centers (DCs). At the same time, the resulting network congestion has been repeatedly reported as the main bottleneck in DCs, even when the overall resource utilization of the infrastructure remains low. However, most current VM management strategies are traffic-agnostic, while the few that are traffic-aware only address a static initial allocation, ignore bandwidth oversubscription, or do not scale. In this paper, we present S-CORE, a scalable VM migration algorithm that dynamically reallocates VMs to servers while minimizing the overall communication footprint of active traffic flows. We formulate the aggregate VM communication as an optimization problem and then define a novel distributed migration scheme that iteratively adapts to dynamic traffic changes. Through extensive simulation and implementation results, we show that S-CORE achieves significant (up to 87%) communication cost reduction while incurring minimal overhead and downtime.
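
    The aggregate communication cost that such traffic-aware migration minimizes can be viewed, per VM, as the sum of its traffic rates weighted by the cost of the host-to-host paths it uses. The following minimal Python sketch (with illustrative function and data-structure names, not S-CORE's actual interfaces) shows the local migration check a distributed scheme of this kind implies:

        # Illustrative sketch only (not S-CORE's implementation): a VM-local
        # check that migrates a VM when the reduction in its weighted
        # communication cost outweighs an assumed migration overhead.

        def comm_cost(vm, host, placement, traffic, link_cost):
            """Communication cost of `vm` if it were placed on `host`.

            traffic[(vm, peer)] is the observed traffic rate between two VMs;
            link_cost[(a, b)] is an assumed per-unit cost of the path between
            hosts a and b, with link_cost[(a, a)] == 0 for co-located VMs.
            """
            return sum(rate * link_cost[(host, placement[peer])]
                       for (src, peer), rate in traffic.items() if src == vm)

        def should_migrate(vm, candidate_host, placement, traffic, link_cost,
                           migration_cost=1.0):
            """Greedy, locally evaluated migration decision for one VM."""
            current = comm_cost(vm, placement[vm], placement, traffic, link_cost)
            proposed = comm_cost(vm, candidate_host, placement, traffic, link_cost)
            return (current - proposed) > migration_cost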

    PLAN: Joint policy- and network-aware VM management for cloud data centers

    Get PDF
    Policies play an important role in network configuration and therefore in offering secure and high-performance services, especially over multi-tenant Cloud Data Center (DC) environments. At the same time, elastic resource provisioning through virtualization often disregards policy requirements, assuming that policy implementation is handled by the underlying network infrastructure. This can result in policy violations, performance degradation, and security vulnerabilities. In this paper, we define PLAN, a PoLicy-Aware and Network-aware VM management scheme that jointly considers DC communication cost reduction through Virtual Machine (VM) migration while meeting network policy requirements. We show that the problem is NP-hard and derive an efficient approximate algorithm that reduces communication cost while adhering to policy constraints. Through extensive evaluation, we show that PLAN can reduce topology-wide communication cost by 38 percent over diverse aggregate traffic and configuration policies.
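
    One way to picture an approximate, policy-respecting migration algorithm is a greedy pass that only considers hosts on which all policy constraints remain satisfiable and then picks the feasible host with the lowest communication cost. The sketch below is such a greedy pass under assumed helper callables (comm_cost, is_policy_compliant, capacity_ok); it illustrates the idea rather than PLAN's actual algorithm:

        # Illustrative greedy pass, not PLAN's algorithm: every helper passed in
        # (comm_cost, is_policy_compliant, capacity_ok) is an assumed callable.

        def greedy_policy_aware_migration(vms, hosts, placement, comm_cost,
                                          is_policy_compliant, capacity_ok):
            """Re-place each VM on the cheapest host that keeps its policies valid."""
            for vm in vms:
                best_host = placement[vm]
                best_cost = comm_cost(vm, best_host, placement)
                for host in hosts:
                    if host == placement[vm]:
                        continue
                    if not (capacity_ok(host, vm)
                            and is_policy_compliant(vm, host, placement)):
                        continue  # would violate capacity or a policy constraint
                    cost = comm_cost(vm, host, placement)
                    if cost < best_cost:
                        best_host, best_cost = host, cost
                placement[vm] = best_host
            return placement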

    Commodity single board computer clusters and their applications

    Get PDF
    Current commodity Single Board Computers (SBCs) are sufficiently powerful to run mainstream operating systems and workloads. Many of these boards may be linked together to create small, low-cost clusters that replicate some features of large data center clusters. The Raspberry Pi Foundation produces a series of SBCs with a price/performance ratio that makes SBC clusters viable, perhaps even expendable. These clusters are an enabler for Edge/Fog Compute, where processing is pushed out towards data sources, reducing bandwidth requirements and decentralizing the architecture. In this paper, we investigate the use cases driving the growth of SBC clusters, examine trends in future hardware developments, and discuss the potential of SBC clusters as a disruptive technology. Compared to traditional clusters, SBC clusters have a reduced footprint, are low-cost, and have low power requirements. This enables different models of deployment, particularly outside traditional data center environments. We discuss the applicability of existing software and management infrastructure to support exotic deployment scenarios and anticipate the next generation of SBCs. We conclude that the SBC cluster is a new and distinct computational deployment paradigm, which is applicable to a wider range of scenarios than current clusters. It facilitates Internet of Things and Smart City systems and is potentially a game changer in pushing application logic out towards the network edge.

    Next generation single board clusters

    Get PDF
    Until recently, cluster computing was too expensive and too complex for commodity users. However, the phenomenal popularity of single board computers like the Raspberry Pi has led to the emergence of the single board computer cluster. This demonstration will present a cheap, practical, and portable Raspberry Pi cluster called Pi Stack. We will show pragmatic custom solutions to hardware issues, such as power distribution, and software issues, such as remote updating. We also sketch potential use cases for Pi Stack and other commodity single board computer cluster architectures.

    Improving data center network utilization using near-optimal traffic engineering

    No full text
    Equal-Cost Multi-Path (ECMP) forwarding is the most prevalent multipath routing used in data center (DC) networks today. However, it fails to exploit the increased path diversity that traffic engineering techniques can provide through the assignment of nonuniform link weights to optimize network resource usage. To this end, constructing a routing algorithm that provides path diversity over nonuniform link weights (i.e., unequal-cost links), simplicity in path discovery, and optimality in minimizing maximum link utilization (MLU) is nontrivial. In this paper, we have implemented and evaluated the Penalizing Exponential Flow-spliTting (PEFT) algorithm in a cloud DC environment based on the two dominant topologies, canonical tree and fat tree. In addition, we have proposed a new cloud DC topology which, with only a marginal modification of the current canonical tree DC architecture, can further reduce MLU and increase overall network capacity utilization through PEFT routing.
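
    The distinguishing idea of PEFT-style routing is that traffic towards a destination is split over all candidate paths, not only the shortest ones, with each path's share decaying exponentially in its total link weight. The short Python sketch below illustrates that splitting rule under assumed inputs (a list of path costs and a scaling constant); it is not the paper's implementation or its optimized link weights:

        # Illustrative sketch of exponential-penalty traffic splitting: each
        # candidate path gets a share of traffic proportional to e^(-cost), so
        # longer (higher-weight) paths still carry some load instead of being
        # excluded as under ECMP. Inputs here are assumptions for illustration.

        import math

        def peft_style_split(path_costs, scale=1.0):
            """Return the traffic fraction assigned to each candidate path."""
            penalties = [math.exp(-scale * c) for c in path_costs]
            total = sum(penalties)
            return [p / total for p in penalties]

        # Example: with path costs 2, 2 and 3, the two shortest paths share most
        # of the load while the longer path still receives a small fraction.
        print(peft_style_split([2.0, 2.0, 3.0]))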